Above: Deep Learning (illustration courtesy imaginima/Getty Images)

Spend a little time together and it starts to feel as if you’ve made a new friend. A smart, sophisticated friend who speaks to any subject authoritatively, who communicates in a gentle yet knowledgeable tone.

Your conversation is delightful and straightforward, and covers an extensive range of topics. In a span of two minutes, you talk about what creates ocean waves (“movement of wind over the surface of the water”) and which classic novel would be better off with a rewritten ending (Dickens’s Great Expectations: “The original ending has been criticized for being too bleak and abrupt,” says the friend. “If I had to suggest a change, it would be to add an epilogue that provides closure to the characters and gives a more satisfying resolution to the story.”).

The friend can find out anything and help solve any problem.

If this sounds too good to be human, well, that’s because it is.

Welcome to a relationship with Chat Generative Pre-trained Transformer (ChatGPT for short). ChatGPT is an advanced artificial intelligence (AI) chatbot built on large language models (LLMs): AI systems, trained on vast quantities of digital text from the internet, that are designed to process natural language and generate humanlike responses.

But—just like any human friend—ChatGPT is far from flawless.

So, tell us about yourself

Leilani Gilpin, assistant professor of computer science and engineering (photo by Carolyn Lagattuta)

The idea behind ChatGPT, as with debates about AI, isn’t new. The Turing Test, which mathematician and computer scientist Alan Turing proposed in 1950, measures a machine’s ability to exhibit intelligent behavior equivalent to a human’s. In the 1960s, researchers began developing tools that could simulate human speech.

In 2018, OpenAI researchers set out to create a state-of-the-art language model that could understand language and generate coherent, informative, and engaging text.

“They achieved this by training me on vast amounts of data from the internet, including books, articles, and social media posts,” writes ChatGPT of its own history.

As to how exactly it works, says Leilani Gilpin, assistant professor of computer science and engineering and affiliate of the Science & Justice Research Center, “we can only guess what it’s doing.” Even if a chat model is open source, it can still be hard to understand how it’s actually creating the responses it does, experts say.

ChatGPT and similar systems such as BlenderBot 3 and Galactica (Facebook/Meta) and Bard (Google) are trained on huge quantities of information. The underlying models are so large that they can partially or wholly memorize parts of their training data.

Analyze that

One concerning aspect Gilpin identifies is users’ tendency to deploy ChatGPT as a therapist. Hearing from several undergraduates that they do this is “scary,” she says, because “the information cannot be trusted. It’s already been pretrained, and it is also learning from your data and it’s probably incentivized to keep you on the application longer.”

Other experts agree.

Physics professor Anthony Aguirre (photo by Nick Gonzales)

Anthony Aguirre, the Faggin Presidential Professor for the Physics of Information at UC Santa Cruz and executive director of the nonprofit Future of Life Institute, an organization that seeks to reduce global catastrophic and existential risks facing humanity, says that using GPT or other language models as therapists (or doctors or lawyers or financial managers) is very fraught.

“Not only can they confidently give incorrect answers, but, unlike humans who have formal fiduciary duties, and certifications for these roles, AI systems have no duties, bear no responsibility, and don’t represent that they are even working for the user, rather than in the interest of the providing company,” he says.

As an AI language model, ChatGPT acknowledges that it does not have “feelings or emotions like a human being,” though “if someone is feeling sad and just needs someone to talk to, I can listen and provide comforting words,” it writes.

Chatbots can write code, too, and can come close to writing correct code. But ChatGPT is trained on code snippets written by humans, says Gilpin, and human coders make mistakes.

“If it is trained on enough of these mistakes, then ChatGPT will learn these mistakes as truths,” she says.
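Gilpin’s concern is easy to picture. As a hypothetical illustration (not an actual sample of ChatGPT’s output or its training data), consider Python’s mutable-default-argument pitfall, a mistake repeated in countless human-written snippets online. A model trained on enough of them could plausibly reproduce the buggy pattern as though it were the idiomatic one:

```python
# A common human mistake found in many public code snippets: a mutable
# default argument. The default list is created once, when the function
# is defined, so state leaks between calls.
def add_item_buggy(item, items=[]):
    items.append(item)
    return items

print(add_item_buggy("a"))  # ['a']
print(add_item_buggy("b"))  # ['a', 'b'] -- surprise: the old list persists

# The correct pattern creates a fresh list on each call.
def add_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(add_item("a"))  # ['a']
print(add_item("b"))  # ['b']
```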

The other issue with coding concerns responsible data science and trustworthy AI.

“The problem is that if we use these tools to save time and apply them without thinking about the societal implications, we can cause unwanted harm,” she says.

ChatGPT has become best known for generating an array of writing styles that would make a human copywriter blanch: prose, poetry, essays, cover letters, recommendation letters, and more.

A Princeton University senior even created an app that can tell whether a text was written by AI. Since its release last year, ChatGPT has had widely documented impacts on higher education, from writing papers for students to appearing as a credited coauthor on published papers, to some scientists’ dismay.

But to what extent can we trust the information it culls?

ChatGPTrippin’

One of the most important aspects of my design is that I am constantly learning and evolving. OpenAI researchers continue to update and improve my algorithms, allowing me to understand and respond to new types of queries and conversations. This means that my abilities are constantly improving, and I can provide better, more helpful responses over time.
—ChatGPT on itself

Because its responses are based on the context of the conversation, ChatGPT claims it can also adjust its tone and language to suit the situation. In other words, it always responds authoritatively, which makes one tendency all the more important for users to be aware of: AI’s propensity to “hallucinate information,” Gilpin says.

Inquiries about one’s own biography may be the most direct avenue to understanding this ghost in the machine.

“It says I have a Ph.D. from UC Berkeley,” Gilpin observes. She doesn’t; her doctorate is from MIT. “It says I have all these claims that are not true,” including awards she never received—“it says I won the NSF Career Award—not yet!”—and a doctoral dissertation award for work in a venue in which she’s never been published.

ChatGPT delivers this information in an assured, confident voice, leaving the user to wonder how many of its other “facts” are equally convincing fictions.

Can’t read my poker face

Jim Whitehead, UCSC professor of computational media

Likewise, Jim Whitehead, UCSC professor of computational media, calls ChatGPT “the best liar in the world” because “it has no tells.”

The Future of Life Institute, the organization Aguirre leads, recently published an open letter calling for a pause in the development of systems more advanced than GPT-4, citing “profound risks to society.” The letter was signed by tech and business leaders including Elon Musk and Steve Wozniak.

AI capabilities are progressing, and deployment is happening too quickly for society to respond and to enact rules and protections to keep AI’s impact positive, says Aguirre. As these systems grow more powerful, their capacity for harm, as well as good, will increase—as will the stakes and the need to govern them effectively.

“The letter is a call to slow down raw capabilities progress, and focus much more intense attention on making the systems safer, more transparent, more intelligible, more unbiased, more loyal, and all of the things we need them to be to reap the benefits of this technology without falling prey to the risks,” says Aguirre.

Whitehead—whose research combines insights from artificial intelligence, software engineering, and computer games—agrees, though he says computer scientists and engineers shouldn’t avoid or sidestep the use of current AI systems like ChatGPT.

“While we should thoughtfully make use of the current generation of chatbots in education, we should be mindful of the potential for misuse of AI chatbot technology, and work to develop ethical frameworks for evaluating potential use cases,” says Whitehead.

Chatbots as educational tools

As with any powerful tool or technology, training people how to use it is key in preventing misuse. Whitehead compares the advent of ChatGPT and similar AI to the advent of any such technology that can do what humans can do, only better and faster.

“Anytime there’s been some technology that’s enhanced our productivity doing this kind of knowledge work,” he says, “sooner or later it’s crept into how we teach and how we integrate that into teaching.”

For example, “when calculators came along there was this debate,” he says, “and that’s shifted in time” to normalizing calculator use. “The use of a calculator is not even discussed anymore. If we have a calculator, we’ll be using a calculator. Manual computation is not part of the curriculum anymore.”

As for the cheating debate, Whitehead suggests that if ChatGPT can do an assignment, perhaps the tasks the assignment was asking for were too simple. Recent studies showed the latest version, GPT-4, scoring in the 90th percentile on the SAT and the bar exam and acing several AP exams.

Rather than blame the technology for being too powerful, Whitehead says, the focus should be on designing stronger assignments, ones that require deeper reasoning than a chatbot can produce.

Age of Reason

It harkens back to points Gilpin raises about what ChatGPT does not have: lived experience.

“These things are not sentient,” Gilpin says. “They’re just mathematical optimization machines—matrix algebra to optimize some type of function. The most probable answer doesn’t always mean it’s the most right one.”
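A toy sketch makes the point concrete. In this illustration (the vocabulary, vectors, and weights below are invented, and real models have billions of parameters, though the core step is the same matrix algebra), a model scores candidate next words, converts the scores into probabilities, and emits the most probable one, whether or not it is factually right:

```python
import numpy as np

# Toy illustration of a language model's core step: matrix algebra that
# turns an encoded prompt into a probability for each candidate next word.
# Everything here is made up for illustration.
vocab = ["Berkeley", "MIT", "Stanford"]

rng = np.random.default_rng(0)
context = rng.normal(size=8)   # stand-in for the encoded prompt
W = rng.normal(size=(3, 8))    # stand-in for learned weights

logits = W @ context                           # one matrix multiply: a score per word
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities

# The model outputs the most probable word -- which is not the same thing
# as the most correct word. If the data happens to favor "Berkeley",
# that is what it will confidently say.
print(dict(zip(vocab, probs.round(3))))
print("model says:", vocab[int(np.argmax(probs))])
```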

But she’s excited about emerging AI technology overall.

“Many in education are worried, but we have to consider this is not going away, and it’s a tool—a wonderful tool that can be used for education,” she says.

Instructors could, for example, “have students use it and explain why it’s wrong,” she continues. “I don’t think Google or education will go away, we’re just going to pivot the way we search for information and discern it.”

Dean of the Baskin School of Engineering Alexander L. Wolf agreed with those sentiments in a recent newsletter addressing AI and ChatGPT.

“Generative AI technologies will not replace engineers, but they will inevitably reshape the skills and habits of mind necessary for engineers and engineering students to succeed,” he wrote. “The worst thing we could do as responsible and progressive educators is to try to resist their presence in the classroom.”

While possibilities for ChatGPT and its ilk as educational tools have yet to be fully revealed and the future is, as always, unknown, the AI does seem characteristically confident in its own abilities.

“Whether you need information on a particular topic, want to chat about a particular subject, or just need someone to talk to,” it says, “I am always here to help.”

Watch the recent panel “Let’s Talk About ChatGPT,” sponsored by the Center for Innovations in Teaching and Learning, The Humanities Institute, and Humanizing Technology.
